Unsupervised robust nonparametric learning of hidden community properties
We consider learning of fundamental properties of communities in large noisy
networks, in the prototypical situation where the nodes or users are split into
two classes according to a binary property, e.g., according to their opinions
or preferences on a topic. For learning these properties, we propose a
nonparametric, unsupervised, and scalable graph scan procedure that is, in
addition, robust against a class of powerful adversaries. In our setup, one of
the communities can fall under the influence of a knowledgeable adversarial
leader, who knows the full network structure, has unlimited computational
resources and can completely foresee our planned actions on the network. We
prove strong consistency of our results in this setup with minimal assumptions.
In particular, the learning procedure estimates the baseline activity of normal
users asymptotically correctly with probability 1; the only assumption being
the existence of a single implicit community of asymptotically negligible
logarithmic size. We provide experiments on real and synthetic data to
illustrate the performance of our method, including examples with adversaries.
Comment: Experiments with new types of adversaries added
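The abstract does not spell out the scan procedure itself. As a minimal sketch of the robustness idea only, assuming each user's activity can be summarized by a single number, a median-style estimator already tolerates a contaminated community of vanishing relative size (everything below is illustrative, not the paper's actual method):

```python
import statistics

def estimate_baseline(activity):
    """Robustly estimate the baseline activity of normal users.

    The median ignores a small contaminated minority, mirroring the
    setting where one community of asymptotically negligible
    (logarithmic) size may be adversarially controlled.  This is an
    illustrative stand-in, not the paper's graph scan procedure.
    """
    return statistics.median(activity)

# 97 normal users at baseline activity 1.0, plus a tiny adversarial
# community reporting inflated activity.
normal = [1.0] * 97
adversarial = [50.0] * 3
baseline = estimate_baseline(normal + adversarial)
print(baseline)  # 1.0
```

Even if the adversarial leader chooses the inflated values with full knowledge of the estimator, the estimate is unaffected as long as the contaminated fraction stays below one half.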
CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning
Program synthesis or code generation aims to generate a program that
satisfies a problem specification. Recent approaches using large-scale
pretrained language models (LMs) have shown promising results, yet they have
some critical limitations. In particular, they often follow a standard
supervised fine-tuning procedure to train a code generation model only from the
pairs of natural-language problem descriptions and ground-truth programs. Such
a paradigm ignores important but potentially useful signals in the problem
specification, such as unit tests, and thus often results in poor performance
on complex unseen coding tasks. To address these
limitations, we propose "CodeRL", a new framework for program synthesis tasks
through pretrained LMs and deep reinforcement learning (RL). Specifically,
during training, we treat the code-generating LM as an actor network, and
introduce a critic network that is trained to predict the functional
correctness of generated programs and provide dense feedback signals to the
actor. During inference, we introduce a new generation procedure with a
critical sampling strategy that allows a model to automatically regenerate
programs based on feedback from example unit tests and critic scores. For the
model backbones, we extend the encoder-decoder architecture of CodeT5 with
enhanced learning objectives, larger model sizes, and better pretraining data.
Our method not only achieves new state-of-the-art (SOTA) results on the
challenging APPS benchmark, but also shows strong zero-shot transfer
capability, with new SOTA results on the simpler MBPP benchmark.
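The inference-time idea of regenerating programs guided by example unit tests and critic scores can be sketched in a toy form. Here `solve`, the candidate strings, and the length-based `critic_score` are all assumptions for illustration; the real critic is a trained network, and this is not CodeRL's implementation:

```python
def passes_example_tests(program_src, tests):
    """Run a candidate program against example unit tests in a fresh namespace."""
    env = {}
    try:
        exec(program_src, env)              # define the candidate function
        for inp, expected in tests:
            if env["solve"](inp) != expected:
                return False
        return True
    except Exception:
        return False

def critic_score(program_src):
    """Stand-in for the learned critic; here it just prefers shorter programs."""
    return -len(program_src)

def select_program(candidates, tests):
    """Keep candidates passing the example tests, then rank by critic score."""
    passing = [c for c in candidates if passes_example_tests(c, tests)]
    pool = passing or candidates            # fall back if nothing passes
    return max(pool, key=critic_score)

candidates = [
    "def solve(x):\n    return x * 2",      # functionally correct
    "def solve(x):\n    return x + 2",      # wrong on the example tests
]
best = select_program(candidates, [(3, 6), (5, 10)])
print(best)
```

In the actual framework, candidates failing the tests would be resampled rather than merely ranked, and the critic's dense feedback is also used during training.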
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM
Code intelligence plays a key role in transforming modern software
engineering. Recently, deep learning-based models, especially Transformer-based
large language models (LLMs), have demonstrated remarkable potential in
tackling these tasks by leveraging massive open-source code data and
programming language features. However, the development and deployment of such
models often require expertise in both machine learning and software
engineering, creating a barrier to model adoption. In this paper, we
present CodeTF, an open-source Transformer-based library for state-of-the-art
Code LLMs and code intelligence. Following the principles of modular design and
extensible framework, we design CodeTF with a unified interface to enable rapid
access and development across different types of models, datasets and tasks.
Our library supports a collection of pretrained Code LLMs and popular
code benchmarks, including a standardized interface to train and serve code
LLMs efficiently, and data features such as language-specific parsers and
utility functions for extracting code attributes. In this paper, we describe
the design principles, the architecture, key modules and components, and
compare with other related library tools. Finally, we hope CodeTF is able to
bridge the gap between machine learning/generative AI and software engineering,
providing a comprehensive open-source solution for developers, researchers, and
practitioners.
Comment: Ongoing work - Draft Preview
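The "unified interface across different types of models" design principle can be illustrated generically. The code below is not CodeTF's API; the class and function names are invented for the sketch, which only shows the registry-plus-abstract-base pattern such a library might use:

```python
from abc import ABC, abstractmethod

class CodeModel(ABC):
    """Common interface so callers need not know each backbone's own API."""
    @abstractmethod
    def generate(self, prompt: str) -> str: ...

class EchoModel(CodeModel):
    """Trivial stand-in backbone used only for this demonstration."""
    def generate(self, prompt: str) -> str:
        return f"# completion for: {prompt}"

# A registry maps model names to implementations, so new backbones can be
# added without changing caller code.
REGISTRY = {"echo": EchoModel}

def load_model(name: str) -> CodeModel:
    """Single entry point, in the spirit of a unified model-loading interface."""
    return REGISTRY[name]()

model = load_model("echo")
print(model.generate("sort a list"))
```

The same pattern extends naturally to datasets and tasks: one registry and one abstract interface per extension point.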
Nonlinear system identification using a cuckoo search optimized adaptive Hammerstein model
An attempt has been made in this paper to model a nonlinear system using a Hammerstein model. The Hammerstein model considered in this paper is a functional link artificial neural network (FLANN) in cascade with an adaptive infinite impulse response (IIR) filter. In order to avoid local optima issues caused by conventional gradient descent training strategies, the model has been trained using a cuckoo search algorithm (CSA), a recently proposed stochastic algorithm. Modeling accuracy of the proposed scheme has been compared with that obtained using other popular evolutionary computing algorithms for the Hammerstein model. Enhanced modeling capability of the CSA-based scheme is evident from the simulation results.
by Akhilesh Gotmare, Rohan Patidar and Nithin V. George
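The forward pass of such a cascade can be sketched directly: a static FLANN nonlinearity (trigonometric functional expansion) feeds an IIR recursion. The weights and coefficients below are arbitrary placeholders; in the paper they would be the parameters that CSA searches over:

```python
import math

def flann_expand(x, order=2):
    """Trigonometric functional expansion commonly used in FLANN models."""
    feats = [x]
    for k in range(1, order + 1):
        feats += [math.sin(k * math.pi * x), math.cos(k * math.pi * x)]
    return feats

def hammerstein(u, w, b, a):
    """Static FLANN nonlinearity in cascade with an IIR filter.

    v[n] = w . flann_expand(u[n])                       (static nonlinearity)
    y[n] = sum_i b[i]*v[n-i] + sum_j a[j]*y[n-1-j]       (IIR dynamics)
    """
    v = [sum(wi * fi for wi, fi in zip(w, flann_expand(x))) for x in u]
    y = []
    for n in range(len(v)):
        acc = sum(b[i] * v[n - i] for i in range(len(b)) if n - i >= 0)
        acc += sum(a[j] * y[n - 1 - j] for j in range(len(a)) if n - 1 - j >= 0)
        y.append(acc)
    return y

# Placeholder parameters: 5 FLANN weights (order-2 expansion), a 2-tap
# feedforward section and a single feedback coefficient.
u = [0.0, 0.5, 1.0, -0.5]
out = hammerstein(u, w=[1.0, 0.2, 0.1, 0.0, 0.0], b=[1.0, 0.5], a=[0.3])
print(out)
```

Identification then amounts to choosing `w`, `b`, and `a` so that this model's output matches the unknown system's output, which is where a stochastic search such as CSA replaces gradient descent.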
Swarm and evolutionary computing algorithms for system identification and filter design: a comprehensive review
An exhaustive review on the use of structured stochastic search approaches to system identification and digital filter design is presented in this paper. In particular, the paper focuses on the identification of various systems using infinite impulse response adaptive filters and Hammerstein models, as well as on the estimation of chaotic systems. In addition to presenting a comprehensive review of the various swarm and evolutionary computing schemes employed for system identification and digital filter design, the paper is also envisioned to act as a quick reference for a few popular evolutionary computing algorithms.
by Akhilesh Gotmare, Sankha Subhra Bhattacharjee, Rohan Patidar and Nithin V. George
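The common structure behind the reviewed schemes is a stochastic search minimizing the output error between an unknown system and a parameterized filter. As a minimal stand-in for the swarm and evolutionary algorithms surveyed, a simple (1+1)-style random search identifying a first-order IIR system looks like this (the system coefficients 0.7 and 0.4 are invented for the demonstration):

```python
import random

def iir(u, b0, a1):
    """First-order IIR filter: y[n] = b0*u[n] + a1*y[n-1]."""
    y, prev = [], 0.0
    for x in u:
        prev = b0 * x + a1 * prev
        y.append(prev)
    return y

def mse(p, q):
    """Mean squared error between two equal-length signals."""
    return sum((a - c) ** 2 for a, c in zip(p, q)) / len(p)

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(200)]
target = iir(u, 0.7, 0.4)                   # the "unknown" system's response

# (1+1)-style stochastic search: perturb the parameters, keep the better set.
best = [random.uniform(-1, 1), random.uniform(-0.9, 0.9)]
best_err = mse(iir(u, *best), target)
for _ in range(3000):
    cand = [best[0] + random.gauss(0, 0.05), best[1] + random.gauss(0, 0.05)]
    err = mse(iir(u, *cand), target)
    if err < best_err:
        best, best_err = cand, err

print(best, best_err)
```

Swarm methods such as PSO or CSA replace the single mutated candidate with a population and a more informed update rule, but the fitness function (output MSE) and the gradient-free character are the same.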